Descentwise inexact proximal algorithms for smooth optimization



Similar resources

Descentwise inexact proximal algorithms for smooth optimization

The proximal method is a standard regularization approach in optimization. Practical implementations of this algorithm require (i) an algorithm to compute the proximal point, (ii) a rule to stop this algorithm, (iii) an update formula for the proximal parameter. In this work we focus on (ii), when smoothness is present – so that Newton-like methods can be used for (i): we aim at giving adequate...
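Below is a minimal sketch of the generic iteration this abstract describes: each outer step approximately minimizes the proximally regularized objective, and an inner stopping test decides when the approximate proximal point is accepted. The gradient inner solver, the fixed parameter lam, and the tolerance rule are illustrative assumptions, not the stopping rule developed in the paper.

```python
# Sketch of an inexact proximal point iteration for a smooth function f.
# grad_f, lam, and the inner stopping tolerance are illustrative assumptions.
import numpy as np

def inexact_proximal_point(grad_f, x0, lam=1.0, outer_iters=50,
                           inner_iters=500, inner_tol=1e-4, step=1e-2):
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        # Subproblem: minimize phi(y) = f(y) + ||y - x||^2 / (2*lam).
        y = x.copy()
        for _ in range(inner_iters):
            g = grad_f(y) + (y - x) / lam        # gradient of the regularized objective
            if np.linalg.norm(g) <= inner_tol:   # assumed inner stopping rule
                break
            y = y - step * g                     # plain gradient step; a Newton-like
                                                 # step would exploit smoothness here
        x = y                                    # accept the approximate proximal point
    return x

# Toy usage: minimize f(x) = 0.5 * ||A x - b||^2 for random A, b.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
sol = inexact_proximal_point(lambda x: A.T @ (A @ x - b), np.zeros(5))
```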


An inexact and nonmonotone proximal method for smooth unconstrained minimization

An implementable proximal point algorithm is established for the smooth nonconvex unconstrained minimization problem. At each iteration, the algorithm minimizes approximately a general quadratic by a truncated strategy with step length control. The main contributions are: (i) a framework for updating the proximal parameter; (ii) inexact criteria for approximately solving the subproblems; (iii) ...
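As a rough illustration of the ingredients listed above, one outer step might look like the sketch below, which approximately minimizes a proximally regularized quadratic model by truncated conjugate gradients and then controls the step length. The monotone Armijo test and the fixed proximal parameter mu are stand-ins; the paper's nonmonotone acceptance rule and parameter update are not reproduced.

```python
# Sketch of one outer step of a proximal-regularized Newton-type iteration with a
# truncated subproblem solve. All tolerances and the line search are assumptions.
import numpy as np

def truncated_cg(H, g, tol=1e-2, max_iters=50):
    # Approximately solve H d = -g, stopping early on a small relative residual
    # or nonpositive curvature (the "truncated" part of the strategy).
    d = np.zeros_like(g, dtype=float)
    r = -np.asarray(g, dtype=float)
    p = r.copy()
    for _ in range(max_iters):
        Hp = H @ p
        curv = p @ Hp
        if curv <= 0:
            break
        alpha = (r @ r) / curv
        d = d + alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) <= tol * np.linalg.norm(g):
            return d
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d

def prox_newton_step(f, grad_f, hess_f, x, mu=1.0, beta=0.5, max_backtracks=30):
    # Regularize the Hessian with the proximal parameter mu, approximately
    # minimize the quadratic model, then apply step-length control.
    g = grad_f(x)
    H = hess_f(x) + mu * np.eye(x.size)
    d = truncated_cg(H, g)
    t = 1.0
    for _ in range(max_backtracks):
        if f(x + t * d) <= f(x) + 1e-4 * t * (g @ d):   # monotone Armijo test (assumed)
            break
        t *= beta
    return x + t * d
```

The CG truncation plays the role of the inexact criterion for the subproblem; tightening tol recovers an essentially exact regularized Newton step.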


Inexact and accelerated proximal point algorithms

We present inexact accelerated proximal point algorithms for minimizing a proper lower semicontinuous and convex function. We carry out a convergence analysis under different types of errors in the evaluation of the proximity operator, and we provide corresponding convergence rates for the objective function values. The proof relies on a generalization of the strategy proposed in [14] for genera...
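The outer loop of such an accelerated scheme can be sketched as follows; approx_prox stands in for any routine that evaluates the proximity operator only up to an error, which is where the paper's error types and rates enter. The Nesterov-style momentum formula and the toy perturbed prox are assumptions made for illustration.

```python
# Schematic accelerated proximal point loop with an inexactly evaluated prox.
import numpy as np

def accelerated_inexact_prox_point(approx_prox, x0, lam=1.0, iters=100):
    x_prev = np.asarray(x0, dtype=float)
    y = x_prev.copy()
    t_prev = 1.0
    for _ in range(iters):
        x = approx_prox(y, lam)                        # inexact proximal step
        t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_prev**2))
        y = x + ((t_prev - 1.0) / t) * (x - x_prev)    # extrapolation (momentum)
        x_prev, t_prev = x, t
    return x_prev

# Toy usage: the exact prox of f(x) = 0.5*||x||^2 is z / (1 + lam);
# a small perturbation mimics an inexact evaluation.
rng = np.random.default_rng(1)
noisy_prox = lambda z, lam: z / (1.0 + lam) + 1e-6 * rng.standard_normal(z.shape)
x_star = accelerated_inexact_prox_point(noisy_prox, np.ones(3))
```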


Inexact Proximal Gradient Methods for Non-convex and Non-smooth Optimization

Non-convex and non-smooth optimization plays an important role in machine learning. The proximal gradient method is one of the most important methods for solving non-convex and non-smooth problems, where a proximal operator needs to be solved exactly at each step. However, in many problems the proximal operator does not have an analytic solution, or an exact solution is expensive to obtain. ...
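A hedged sketch of the basic inexact proximal gradient iteration for a composite objective f + g follows; approx_prox_g is a placeholder for any inner routine that returns the proximal point of g only approximately, and the accuracy conditions analyzed in the paper are not reproduced.

```python
# Sketch of inexact proximal gradient for min_x f(x) + g(x), with f smooth
# (possibly non-convex) and g non-smooth. approx_prox_g is an assumed black box.
import numpy as np

def inexact_prox_grad(grad_f, approx_prox_g, x0, eta=0.01, iters=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        z = x - eta * grad_f(x)          # forward (gradient) step on the smooth part
        x = approx_prox_g(z, eta)        # backward (approximate proximal) step on g
    return x

# Toy usage: f(x) = 0.5*||Ax - b||^2, g = l1 norm. The exact prox of g is
# soft-thresholding; a tiny perturbation mimics an inexact inner solver.
rng = np.random.default_rng(2)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
approx_prox_l1 = lambda z, eta: soft(z, 0.5 * eta) + 1e-8 * rng.standard_normal(z.shape)
eta = 1.0 / np.linalg.norm(A.T @ A, 2)   # step size of roughly 1/L for the smooth part
x_hat = inexact_prox_grad(lambda x: A.T @ (A @ x - b), approx_prox_l1,
                          np.zeros(10), eta=eta)
```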


Inexact proximal stochastic gradient method for convex composite optimization

We study an inexact proximal stochastic gradient (IPSG) method for convex composite optimization, whose objective function is the sum of an average of a large number of smooth convex functions and a convex, but possibly nonsmooth, function. Variance reduction techniques are incorporated in the method to reduce the stochastic gradient variance. The main feature of this IPSG algorithm is to a...
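The sketch below combines an SVRG-style variance-reduced gradient estimator with an approximate proximal step, in the spirit of the method described above; the step size, epoch length, and perturbed prox are placeholders rather than the IPSG algorithm's actual choices.

```python
# Rough sketch of a variance-reduced (SVRG-style) proximal stochastic gradient
# method with an approximate proximal step; all parameters are assumptions.
import numpy as np

def inexact_prox_svrg(grad_fi, n, approx_prox_g, x0, eta=0.01, epochs=10, m=100, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        x_ref = x.copy()
        full_grad = np.mean([grad_fi(x_ref, i) for i in range(n)], axis=0)  # snapshot
        for _ in range(m):
            i = rng.integers(n)
            v = grad_fi(x, i) - grad_fi(x_ref, i) + full_grad   # variance-reduced estimator
            x = approx_prox_g(x - eta * v, eta)                 # approximate proximal step
    return x

# Toy usage: least squares plus an l1 term, with a slightly perturbed prox.
rng = np.random.default_rng(3)
A, b = rng.standard_normal((200, 20)), rng.standard_normal(200)
grad_fi = lambda x, i: A[i] * (A[i] @ x - b[i])
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
approx_prox = lambda z, eta: soft(z, 0.1 * eta) + 1e-8 * rng.standard_normal(z.shape)
x_hat = inexact_prox_svrg(grad_fi, 200, approx_prox, np.zeros(20))
```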



Journal

Journal title: Computational Optimization and Applications

Year: 2012

ISSN: 0926-6003, 1573-2894

DOI: 10.1007/s10589-012-9461-3